
    Toward a Thinking Microscope: Deep Learning in Optical Microscopy and Image Reconstruction

    We discuss recently emerging applications of state-of-the-art deep learning methods in optical microscopy and microscopic image reconstruction, which enable new transformations among different modes and modalities of microscopic imaging, driven entirely by image data. We believe that deep learning will fundamentally change both the hardware and the image reconstruction methods used in optical microscopy in a holistic manner.

    Analysis of Diffractive Optical Neural Networks and Their Integration with Electronic Neural Networks

    Optical machine learning offers advantages in terms of power efficiency, scalability and computation speed. Recently, an optical machine learning method based on Diffractive Deep Neural Networks (D2NNs) has been introduced to execute a function as the input light diffracts through passive layers designed by deep learning on a computer. Here we introduce improvements to D2NNs by changing the training loss function and reducing the impact of vanishing gradients in the error back-propagation step. Using five phase-only diffractive layers, we numerically achieved classification accuracies of 97.18% and 89.13% for optical recognition of handwritten digits and fashion products, respectively; using both phase and amplitude modulation (complex-valued) at each layer, our inference performance improved to 97.81% and 89.32%, respectively. Furthermore, we report the integration of D2NNs with electronic neural networks to create hybrid classifiers that significantly reduce the number of input pixels into an electronic network, using an ultra-compact front-end D2NN with a layer-to-layer distance of a few wavelengths, which also reduces the complexity of the successive electronic network. Using a 5-layer phase-only D2NN jointly optimized with a single fully-connected electronic layer, we achieved classification accuracies of 98.71% and 90.04% for the recognition of handwritten digits and fashion products, respectively. Moreover, the input to the electronic network was compressed by >7.8 times, down to 10×10 pixels. Beyond creating low-power and high-frame-rate machine learning platforms, D2NN-based hybrid neural networks will find applications in smart optical imager and sensor design.
    Comment: 22 pages, 5 figures, 4 tables, 1 supplementary figure, 2 supplementary tables
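
    For readers unfamiliar with the D2NN building block, the sketch below shows one trainable phase-only layer followed by free-space (angular-spectrum) propagation in PyTorch. The grid size, wavelength and layer spacing are illustrative placeholders rather than values from the paper, and the clamp simply discards evanescent components.

```python
import torch
import torch.nn as nn

class PhaseOnlyDiffractiveLayer(nn.Module):
    """One trainable phase-only surface followed by propagation to the next
    plane. A minimal sketch of the D2NN building block; all physical
    parameters below are illustrative assumptions."""
    def __init__(self, n=100, pixel=0.5e-3, wavelength=0.75e-3, z=3e-3):
        super().__init__()
        self.phase = nn.Parameter(torch.zeros(n, n))  # trainable phase map
        fx = torch.fft.fftfreq(n, d=pixel)
        fx2 = fx[None, :] ** 2 + fx[:, None] ** 2
        k2 = (1.0 / wavelength) ** 2
        # Angular-spectrum transfer function for distance z (evanescent
        # components are clamped to zero spatial frequency support).
        kz = 2j * torch.pi * z * torch.sqrt(torch.clamp(k2 - fx2, min=0.0))
        self.register_buffer("H", torch.exp(kz))

    def forward(self, u):  # u: complex field of shape (..., n, n)
        u = u * torch.exp(1j * self.phase)                   # phase modulation
        return torch.fft.ifft2(torch.fft.fft2(u) * self.H)   # propagate by z
```

    Stacking several such layers and pooling the output intensity over detector regions gives the class scores that the training loss acts on.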

    Phase recovery and holographic image reconstruction using deep learning in neural networks

    Phase recovery from intensity-only measurements forms the heart of coherent imaging techniques and holography. Here we demonstrate that a neural network can learn to perform phase recovery and holographic image reconstruction after appropriate training. This deep learning-based approach provides an entirely new framework to conduct holographic imaging by rapidly eliminating twin-image and self-interference-related spatial artifacts. Compared to existing approaches, this neural network-based method is significantly faster to compute and reconstructs improved phase and amplitude images of the objects using only one hologram, i.e., it requires fewer measurements in addition to being computationally faster. We validated this method by reconstructing phase and amplitude images of various samples, including blood and Pap smears and tissue sections. These results are broadly applicable to any phase recovery problem and highlight that, through machine learning, challenging problems in imaging science can be overcome, providing new avenues to design powerful computational imaging systems.
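
    As context for the twin-image problem mentioned above, a single in-line hologram is typically back-propagated numerically to the sample plane before any learning-based refinement; the NumPy sketch below shows that standard angular-spectrum step. All parameter values are illustrative assumptions, and the back-propagated field is what would typically be handed to the trained network.

```python
import numpy as np

def angular_spectrum(field, wavelength, pixel, z):
    """Propagate a complex field by distance z (z < 0 back-propagates)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    fx2 = fx[None, :] ** 2 + fx[:, None] ** 2
    kz = 2j * np.pi * z * np.sqrt(np.maximum(wavelength ** -2 - fx2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(kz))

# Back-propagating the square root of a single hologram intensity yields an
# object estimate contaminated by twin-image artifacts; removing those
# artifacts is the task the network learns.
hologram = np.random.rand(512, 512)                 # placeholder intensity
field = angular_spectrum(np.sqrt(hologram).astype(complex),
                         wavelength=532e-9, pixel=1.12e-6, z=-300e-6)
```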

    Scale-, shift- and rotation-invariant diffractive optical networks

    Recent research efforts in optical computing have gravitated towards developing optical neural networks that aim to benefit from the processing speed and parallelism of optics/photonics in machine learning applications. Among these endeavors, Diffractive Deep Neural Networks (D2NNs) harness light-matter interaction over a series of trainable surfaces, designed using deep learning, to compute a desired statistical inference task as the light waves propagate from the input plane to the output field-of-view. Although earlier studies have demonstrated the generalization capability of diffractive optical networks to unseen data, achieving, e.g., >98% image classification accuracy for handwritten digits, these previous designs are in general sensitive to the spatial scaling, translation and rotation of the input objects. Here, we demonstrate a new training strategy for diffractive networks that introduces input object translation, rotation and/or scaling during the training phase as uniformly distributed random variables to build resilience in their blind inference performance against such object transformations. This training strategy successfully guides the evolution of the diffractive optical network design towards a solution that is scale-, shift- and rotation-invariant, which is especially important and useful for dynamic machine vision applications in, e.g., autonomous cars and in vivo imaging of biomedical specimens.
    Comment: 28 pages, 6 figures, 1 table
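
    The training strategy amounts to on-the-fly augmentation with uniformly distributed random transformations of the input object. A minimal Python sketch using torchvision is shown below; the ranges are placeholders, not the distributions used in the paper.

```python
import random
import torchvision.transforms.functional as TF

def random_object_transform(img, max_deg=20.0, max_shift=0.1,
                            scale_range=(0.8, 1.2)):
    """Apply a uniformly distributed random rotation, shift and scale to an
    input image tensor (C, H, W). Ranges are illustrative assumptions."""
    angle = random.uniform(-max_deg, max_deg)
    dx = random.uniform(-max_shift, max_shift) * img.shape[-1]
    dy = random.uniform(-max_shift, max_shift) * img.shape[-2]
    scale = random.uniform(*scale_range)
    return TF.affine(img, angle=angle, translate=[int(dx), int(dy)],
                     scale=scale, shear=[0.0])
```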

    Single-shot autofocusing of microscopy images using deep learning

    We demonstrate a deep learning-based offline autofocusing method, termed Deep-R, that is trained to rapidly and blindly autofocus a single-shot microscopy image of a specimen acquired at an arbitrary out-of-focus plane. We illustrate the efficacy of Deep-R using various tissue sections that were imaged using fluorescence and brightfield microscopy modalities, and demonstrate snapshot autofocusing under different scenarios, such as a uniform axial defocus as well as a sample tilt within the field-of-view. Our results reveal that Deep-R is significantly faster than standard online algorithmic autofocusing methods. This deep learning-based blind autofocusing framework opens up new opportunities for rapid microscopic imaging of large sample areas, while also reducing the photon dose on the sample.
    Comment: 27 pages, 8 figures, 9 supplementary figures, 2 supplementary tables
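
    One plausible way to assemble training data for such an offline autofocusing network is to pair each defocused plane of an axial image stack with the in-focus image of the same field of view. The snippet below sketches that pairing; the scheme is an assumption for illustration, not a protocol taken from the paper.

```python
def make_training_pairs(z_stack, focus_index):
    """z_stack: list of 2D images of the same field at different depths.
    Returns (defocused input, in-focus target) pairs for supervised training."""
    target = z_stack[focus_index]            # in-focus reference image
    return [(img, target)
            for i, img in enumerate(z_stack) if i != focus_index]
```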

    Digital synthesis of histological stains using micro-structured and multiplexed virtual staining of label-free tissue

    Histological staining is a vital step used to diagnose various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time-consuming, labor-intensive, expensive and destructive to the specimen. Recently, the ability to virtually stain unlabeled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue- and stain-specific deep neural networks. Here, we present a new deep learning-based framework that generates virtually stained images from label-free tissue, where different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information at its input: (1) autofluorescence images of the label-free tissue sample, and (2) a digital staining matrix that represents the desired microscopic map of the different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabeled kidney tissue sections to generate micro-structured combinations of Hematoxylin and Eosin (H&E), Jones silver stain, and Masson's Trichrome stain. Using a single network, this approach multiplexes the virtual staining of label-free tissue with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created on the same tissue cross-section, which is currently not feasible with standard histochemical staining methods.
    Comment: 19 pages, 5 figures, 2 tables
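
    The two-source input described above can be pictured as a channel concatenation: the autofluorescence channels plus a one-hot encoding of the user-defined stain map. The sketch below assumes this encoding; the shapes and label conventions are illustrative, not taken from the paper.

```python
import torch

def build_network_input(autofluorescence, stain_map, num_stains=3):
    """autofluorescence: (C, H, W) label-free input channels.
    stain_map: (H, W) integer per-pixel stain labels, e.g. an assumed
    convention of 0=H&E, 1=Jones, 2=Masson's Trichrome."""
    onehot = torch.nn.functional.one_hot(stain_map.long(), num_stains)  # H,W,S
    onehot = onehot.permute(2, 0, 1).float()                            # S,H,W
    return torch.cat([autofluorescence, onehot], dim=0)  # stacked input channels
```

    Blending stains would then correspond to replacing the one-hot map with fractional per-pixel weights over the stain channels.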

    Deep learning-based super-resolution in coherent imaging systems

    We present a deep learning framework based on a generative adversarial network (GAN) to perform super-resolution in coherent imaging systems. We demonstrate that this framework can enhance the resolution of both pixel-size-limited and diffraction-limited coherent imaging systems. We experimentally validated the capabilities of this deep learning-based coherent imaging approach by super-resolving complex-valued images acquired using a lensfree on-chip holographic microscope, the resolution of which was pixel-size-limited. Using the same GAN-based approach, we also improved the resolution of a lens-based holographic imaging system that was limited in resolution by the numerical aperture of its objective lens. This deep learning-based super-resolution framework can be broadly applied to enhance the space-bandwidth product of coherent imaging systems using image data and convolutional neural networks, and provides a rapid, non-iterative method for solving inverse image reconstruction and enhancement problems in optics.
    Comment: 18 pages, 9 figures, 3 tables
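
    A typical composite objective for GAN-based super-resolution combines a pixel-fidelity term with an adversarial term. The sketch below is a generic illustration of that idea, not the specific loss used in the paper; the weighting alpha and the L1 fidelity term are arbitrary placeholders.

```python
import torch
import torch.nn.functional as F

def generator_loss(discriminator, sr, hr, alpha=0.01):
    """sr: super-resolved output, hr: ground-truth high-resolution image.
    Returns pixel fidelity plus a weighted adversarial term (illustrative)."""
    pixel = F.l1_loss(sr, hr)                                # data fidelity
    adv = -torch.log(torch.sigmoid(discriminator(sr)) + 1e-8).mean()
    return pixel + alpha * adv
```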

    PhaseStain: Digital staining of label-free quantitative phase microscopy images using deep learning

    Using a deep neural network, we demonstrate a digital staining technique, which we term PhaseStain, to transform quantitative phase images (QPI) of label-free tissue sections into images that are equivalent to the brightfield microscopy images of the same samples that are histochemically stained. Through pairs of image data (QPI and the corresponding brightfield images, acquired after staining), we train a generative adversarial network (GAN) and demonstrate the effectiveness of this virtual-staining approach using sections of human skin, kidney and liver tissue, matching the brightfield microscopy images of the same samples stained with Hematoxylin and Eosin, Jones' stain, and Masson's trichrome stain, respectively. This digital staining framework may further strengthen various uses of label-free QPI techniques in pathology applications and biomedical research in general by eliminating the need for chemical staining, reducing sample-preparation costs and saving time. Our results provide a powerful example of the unique opportunities created by data-driven image transformations enabled by deep learning.

    Class-specific Differential Detection in Diffractive Optical Neural Networks Improves Inference Accuracy

    Diffractive deep neural networks have been introduced earlier as an optical machine learning framework that uses task-specific diffractive surfaces designed by deep learning to all-optically perform inference, achieving promising performance for object classification and imaging. Here we demonstrate systematic improvements in diffractive optical neural networks based on a differential measurement technique that mitigates the non-negativity constraint of light intensity. In this scheme, each class is assigned to a separate pair of photodetectors behind a diffractive network, and the class inference is made by maximizing the normalized signal difference between the detector pairs. Moreover, by utilizing the inherent parallelization capability of optical systems, we reduced the signal coupling between the positive and negative detectors of each class by dividing their optical path into two jointly-trained diffractive neural networks that work in parallel. We further made use of this parallelization approach and divided the individual classes among multiple jointly-trained differential diffractive neural networks. Using this class-specific differential detection in jointly-optimized diffractive networks, our simulations achieved testing accuracies of 98.52%, 91.48% and 50.82% for the MNIST, Fashion-MNIST and grayscale CIFAR-10 datasets, respectively. Similar to ensemble methods practiced in machine learning, we also independently optimized multiple differential diffractive networks that optically project their light onto a common detector plane, and achieved testing accuracies of 98.59%, 91.06% and 51.44% for MNIST, Fashion-MNIST and grayscale CIFAR-10, respectively. Through these systematic advances in designing diffractive neural networks, the reported classification accuracies set the state of the art for all-optical neural network designs.
    Comment: 21 pages, 6 figures, 3 tables
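
    The differential decision rule is simple to state in code: for each class, read the positive and negative detector intensities, take the normalized difference, and pick the class that maximizes it. A minimal NumPy sketch, assuming the detector readout values are given:

```python
import numpy as np

def differential_class_scores(pos, neg):
    """pos[c], neg[c]: optical intensities on the positive and negative
    photodetectors assigned to class c. The score is the normalized signal
    difference; the predicted class maximizes it."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    scores = (pos - neg) / (pos + neg + 1e-12)   # normalized differential signal
    return int(np.argmax(scores)), scores
```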

    Accurate color imaging of pathology slides using holography and absorbance spectrum estimation of histochemical stains

    Holographic microscopy presents challenges for color reproduction due to its use of narrow-band illumination sources, which especially impacts the imaging of stained pathology slides for clinical diagnoses. Here, an accurate color holographic microscopy framework using absorbance spectrum estimation is presented. This method uses multispectral holographic images acquired and reconstructed at a small number (e.g., three to six) of wavelengths, estimates the absorbance spectrum of the sample, and projects it onto a color tristimulus. Using this method, the wavelength selection was optimized to holographically image 25 pathology slide samples with different tissue and stain combinations, significantly reducing color errors in the final reconstructed images. The results can be used as a practical guide for various imaging applications and, in particular, to correct color distortions in holographic imaging of pathology samples spanning different dyes and tissue types.
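
    Conceptually, the pipeline maps a handful of per-wavelength intensity measurements to absorbance, converts back to transmittance, and integrates against color-matching functions to obtain a tristimulus value. The sketch below outlines those steps; the array conventions are assumptions, and the interpolation of the sparse absorbance samples to a dense spectrum is omitted.

```python
import numpy as np

def tristimulus_from_absorbance(I, I0, cmf, illuminant):
    """I, I0: sample and reference intensities, shape (..., n_wavelengths).
    cmf: CIE color-matching functions sampled at the same wavelengths,
    shape (n_wavelengths, 3); illuminant: shape (n_wavelengths,).
    In practice the sparse absorbance samples would first be interpolated
    to a dense spectrum, a step omitted in this sketch."""
    A = -np.log10(np.clip(I / I0, 1e-6, None))        # absorbance per wavelength
    T = 10.0 ** (-A)                                  # estimated transmittance
    weights = cmf * illuminant[:, None]               # (n_wl, 3) weighting
    XYZ = np.tensordot(T, weights, axes=([-1], [0]))  # integrate over wavelength
    return XYZ / weights[:, 1].sum()                  # normalize by Y of white
```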